
    Exploring Spatio-Temporal Representations by Integrating Attention-based Bidirectional-LSTM-RNNs and FCNs for Speech Emotion Recognition

    Automatic emotion recognition from speech, an important and challenging task in affective computing, relies heavily on the effectiveness of the speech features used for classification. Previous approaches to emotion recognition have mostly focused on extracting carefully hand-crafted features, and how to model spatio-temporal dynamics effectively for speech emotion recognition is still under active investigation. In this paper, we propose a method to tackle the problem of emotion-relevant feature extraction from speech by combining attention-based bidirectional long short-term memory recurrent neural networks (BLSTM-RNNs) with fully convolutional networks (FCNs) in order to automatically learn the best spatio-temporal representations of speech signals. The learned high-level features are then fed into a deep neural network (DNN) to predict the final emotion. Experimental results on the Chinese Natural Audio-Visual Emotion Database (CHEAVD) and the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpora show that our method provides more accurate predictions than other existing emotion recognition algorithms.
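
    The abstract only outlines the architecture at a high level. As a rough illustration, the following is a minimal PyTorch sketch of the general idea, an attention-pooled BLSTM combined with a fully convolutional branch feeding a DNN classifier; all layer sizes, the 40-band Mel input, and the 4-class output are assumptions made for illustration, not the authors' exact configuration.

        # Hedged sketch (not the authors' exact model): an attention-pooled BLSTM over
        # frame-level features combined with a small fully convolutional branch over the
        # log-Mel spectrogram, followed by a DNN classifier. All layer sizes, the 40-band
        # Mel input, and the 4-class output are illustrative assumptions.
        import torch
        import torch.nn as nn

        class AttentiveBLSTMFCN(nn.Module):
            def __init__(self, n_mels=40, hidden=128, n_classes=4):
                super().__init__()
                # Temporal branch: bidirectional LSTM over the Mel-frame sequence.
                self.blstm = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
                # Additive attention weights over the BLSTM time steps.
                self.attn = nn.Linear(2 * hidden, 1)
                # "Spatial" branch: fully convolutional stack over the (1, T, n_mels) map.
                self.fcn = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 64, 1, 1)
                )
                # DNN classifier on the fused representation.
                self.classifier = nn.Sequential(
                    nn.Linear(2 * hidden + 64, 128), nn.ReLU(), nn.Dropout(0.3),
                    nn.Linear(128, n_classes),
                )

            def forward(self, x):            # x: (B, T, n_mels) log-Mel frames
                h, _ = self.blstm(x)         # (B, T, 2*hidden)
                w = torch.softmax(self.attn(h), dim=1)          # (B, T, 1) attention weights
                temporal = (w * h).sum(dim=1)                   # attention-pooled summary
                spatial = self.fcn(x.unsqueeze(1)).flatten(1)   # (B, 64)
                return self.classifier(torch.cat([temporal, spatial], dim=1))

        model = AttentiveBLSTMFCN()
        logits = model(torch.randn(8, 300, 40))  # 8 utterances, 300 frames each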

    A hierarchical attention network-based approach for depression detection from transcribed clinical interviews

    The high prevalence of depression in society has given rise to a need for new digital tools that can aid its early detection. Among other effects, depression impacts the use of language. Seeking to exploit this, this work focuses on detecting depressed and non-depressed individuals through the analysis of linguistic information extracted from transcripts of clinical interviews with a virtual agent. Specifically, we investigated the advantages of employing hierarchical attention-based networks for this task. Using pretrained Global Vectors (GloVe) word embeddings to extract low-level representations of the words, we compared hierarchical local-global attention networks and hierarchical contextual attention networks. We performed our experiments on the Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WoZ) dataset, which contains audio, visual, and linguistic information acquired from participants during a clinical session. Our results on the DAIC-WoZ test set indicate that hierarchical contextual attention networks are the most suitable configuration for detecting depression from transcripts. This configuration achieves an Unweighted Average Recall (UAR) of .66 on the test set, surpassing our baseline, a Recurrent Neural Network that does not use attention. Funding by EU sustAGE (826506), EU RADAR-CNS (115902), the Key Program of the Natural Science Foundation of Tianjin, China (18JCZDJC36300), and BMW Group Research. Interspeech 2019, pages 221-225. https://www.isca-speech.org/archive/Interspeech_2019/index.htm
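
    As a rough companion to the abstract, the sketch below shows a generic hierarchical attention classifier in PyTorch: word-level attention builds sentence vectors, sentence-level attention builds an interview-level vector, and a linear head predicts depressed versus non-depressed. The 300-dimensional GloVe embedding size, GRU widths, and input shapes are assumptions, not the configuration used in the paper.

        # Hedged sketch of a hierarchical attention classifier in the spirit described
        # above: word-level attention builds sentence vectors, sentence-level attention
        # builds an interview vector, and a linear head predicts depressed / not depressed.
        # Embedding size (GloVe 300-d), GRU sizes, and input shapes are assumptions.
        import torch
        import torch.nn as nn

        class AttnPool(nn.Module):
            """Additive attention pooling over a sequence of hidden states."""
            def __init__(self, dim):
                super().__init__()
                self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))
            def forward(self, h):                      # h: (B, T, dim)
                w = torch.softmax(self.score(h), dim=1)
                return (w * h).sum(dim=1)              # (B, dim)

        class HierarchicalAttentionNet(nn.Module):
            def __init__(self, vocab_size, emb_dim=300, hidden=64, n_classes=2):
                super().__init__()
                self.emb = nn.Embedding(vocab_size, emb_dim)   # GloVe weights would be loaded here
                self.word_gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
                self.word_attn = AttnPool(2 * hidden)
                self.sent_gru = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
                self.sent_attn = AttnPool(2 * hidden)
                self.out = nn.Linear(2 * hidden, n_classes)

            def forward(self, docs):                   # docs: (B, n_sents, n_words) word ids
                B, S, W = docs.shape
                words = self.emb(docs.view(B * S, W))           # (B*S, W, emb_dim)
                h_w, _ = self.word_gru(words)
                sent_vecs = self.word_attn(h_w).view(B, S, -1)  # (B, S, 2*hidden)
                h_s, _ = self.sent_gru(sent_vecs)
                doc_vec = self.sent_attn(h_s)                   # (B, 2*hidden)
                return self.out(doc_vec)

        model = HierarchicalAttentionNet(vocab_size=20000)
        logits = model(torch.randint(0, 20000, (4, 10, 25)))  # 4 interviews, 10 sentences, 25 words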

    Attention-enhanced connectionist temporal classification for discrete speech emotion recognition

    Discrete speech emotion recognition (SER), the assignment of a single emotion label to an entire speech utterance, is typically performed as a sequence-to-label task. This approach, however, is limited in that it can result in models that do not capture temporal changes in the speech signal, including those indicative of a particular emotion. One potential solution to overcome this limitation is to model SER as a sequence-to-sequence task instead. In this regard, we have developed an attention-based bidirectional long short-term memory (BLSTM) neural network in combination with a connectionist temporal classification (CTC) objective function (Attention-BLSTM-CTC) for SER. We also assessed the benefits of incorporating two contemporary attention mechanisms, namely component attention and quantum attention, into the CTC framework. To the best of the authors’ knowledge, this is the first time that such a hybrid architecture has been employed for SER. We demonstrated the effectiveness of our approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and FAU-Aibo Emotion corpora. The experimental results demonstrate that our proposed model outperforms current state-of-the-art approaches. The work presented in this paper was substantially supported by the National Natural Science Foundation of China (Grant No. 61702370), the Key Program of the Natural Science Foundation of Tianjin (Grant No. 18JCZDJC36300), the Open Projects Program of the National Laboratory of Pattern Recognition, and the Senior Visiting Scholar Program of Tianjin Normal University. Interspeech 2019. ISSN: 1990-977
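
    For readers unfamiliar with the sequence-to-sequence framing, the following is a minimal PyTorch sketch of a plain BLSTM-CTC setup for utterance-level emotion labels. It omits the component and quantum attention mechanisms that are the paper's contribution, and all dimensions as well as the one-symbol-per-utterance target are illustrative assumptions.

        # Hedged sketch of the sequence-to-sequence framing: a BLSTM emits a per-frame
        # distribution over emotion symbols plus a CTC "blank", and CTC aligns the frame
        # sequence to the utterance's (short) label sequence. The attention mechanisms
        # described above are omitted; sizes and targets are illustrative assumptions.
        import torch
        import torch.nn as nn

        class BLSTMCTC(nn.Module):
            def __init__(self, n_feats=40, hidden=128, n_emotions=4):
                super().__init__()
                self.blstm = nn.LSTM(n_feats, hidden, num_layers=2,
                                     batch_first=True, bidirectional=True)
                self.proj = nn.Linear(2 * hidden, n_emotions + 1)  # +1 for the CTC blank

            def forward(self, x):                     # x: (B, T, n_feats)
                h, _ = self.blstm(x)
                return self.proj(h).log_softmax(-1)   # (B, T, n_emotions+1)

        model = BLSTMCTC()
        ctc = nn.CTCLoss(blank=4)                     # blank index = n_emotions

        x = torch.randn(8, 300, 40)                   # 8 utterances, 300 frames each
        log_probs = model(x).transpose(0, 1)          # CTCLoss expects (T, B, C)
        targets = torch.randint(0, 4, (8, 1))         # one emotion symbol per utterance
        input_lengths = torch.full((8,), 300, dtype=torch.long)
        target_lengths = torch.ones(8, dtype=torch.long)
        loss = ctc(log_probs, targets, input_lengths, target_lengths)
        loss.backward()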

    Hierarchical attention transfer networks for depression assessment from speech


    Frustration recognition from speech during game interaction using wide residual networks

    Background: Although frustration is a common emotional reaction while playing games, an excessive level of frustration can harm users’ experience and discourage them from undertaking further game interactions. The automatic detection of players’ frustration enables the development of adaptive systems which, through real-time difficulty adjustment, adapt the game to the user’s specific needs, thus maximising the player’s experience and supporting the game’s success. To this end, we present our speech-based approach for the automatic detection of frustration during game interactions, a task still under-explored in research.
    Method: The experiments were performed on the Multimodal Game Frustration Database (MGFD), an audiovisual dataset collected within a Wizard-of-Oz framework and specially tailored to investigate verbal and facial expressions of frustration during game interactions. We explored the performance of a variety of acoustic feature sets, including Mel-spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs), as well as the low-dimensional knowledge-based acoustic feature set eGeMAPS. Motivated by the steady improvements achieved by Convolutional Neural Networks (CNNs) in speech recognition tasks, and unlike the MGFD baseline, which is based on a Long Short-Term Memory (LSTM) architecture and a Support Vector Machine (SVM) classifier, in the present work we consider commonly used CNNs, including ResNets, VGG, and AlexNet. Furthermore, given the still open debate on the suitability of shallow versus deep networks, we also examine the performance of two recent deep CNNs, i.e., WideResNets and EfficientNet.
    Results: Our best result, achieved with WideResNets and Mel-spectrogram features, increases the system performance from 58.8 % Unweighted Average Recall (UAR) to 93.1 % UAR for speech-based automatic frustration recognition.
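
    As a rough illustration of the best-performing configuration reported above, the sketch below adapts a torchvision WideResNet-50-2 to single-channel Mel-spectrogram input with a binary frustrated / not-frustrated output; the specific WideResNet variant, input resolution, and training details are assumptions rather than the paper's exact setup.

        # Hedged sketch (not the paper's exact setup): a torchvision WideResNet-50-2
        # adapted to single-channel Mel-spectrogram input and a binary
        # frustrated / not-frustrated output. Spectrogram size, optimisation, and the
        # choice of the 50-layer variant are illustrative assumptions.
        import torch
        import torch.nn as nn
        from torchvision.models import wide_resnet50_2

        model = wide_resnet50_2()                       # randomly initialised weights
        model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        model.fc = nn.Linear(model.fc.in_features, 2)   # frustrated vs. not frustrated

        mel = torch.randn(4, 1, 128, 431)               # 4 clips, 128 Mel bands, ~10 s of frames
        logits = model(mel)                             # (4, 2)
        loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 1, 0]))
        loss.backward()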